Retroaction: how indicators feed back onto quantified actors
By Alain Desrosières, Institut National de la Statistique et des Etudes Economiques (INSEE)
Edited by Richard Rottenburg (Martin-Luther-Universität Halle-Wittenberg, Germany), Sally E. Merry (New York University), Sung-Joon Park (Martin-Luther-Universität Halle-Wittenberg, Germany) and Johanna Mugler (Universität Bern, Switzerland)
Book: The World of Indicators
Published online: 05 October 2015
Print publication: 15 September 2015, pp. 329-353
- Chapter
Summary
Introduction
Are quantitative indicators an instrument of emancipation or an instrument of oppression? In line with the philosophy of the Enlightenment, they were long perceived as a means of giving society a mirror-image of itself so that it could move towards greater justice. Today, in the world of neoliberal economics, they are mostly seen as a pretext for fuelling individualism and competition between individuals, particularly through the performance indicators built into management techniques such as benchmarking. They can also have direct effects, qualified here as feedback, on those in charge of the statistical monitoring of unemployment. For example, the director of the French national statistics bureau, INSEE, was suspended for having ‘badly managed’ the conflict over unemployment figures, and deep budget cuts to the statistics bureau were announced. In spite of that, public statistics in France still enjoyed a good reputation among their users: economic actors, journalists, trade unionists, teachers and researchers. The media published numerous opinion pieces deploring what was perceived as a threat to dismantle the system; a commonly used metaphor was ‘breaking the thermometer in order to treat the fever’.
Shortly thereafter, a young INSEE researcher was struck by a revealing incident. While marching in a trade-union demonstration against government policy to dismantle public services, she solicited demonstrators to show their support by signing a petition. To her surprise, she was told: ‘Your statistics are only used to control us, police us, and make our working conditions worse.’ Again, in 2009, academics, researchers and health workers were up in arms against the ‘reforms’ being applied to their activities, which involved quantified evaluations of their ‘performance’. As they saw it, this would dispossess them of their specific skills for the benefit of ‘New Public Management’, which relies heavily on quantitative indicators. Among academic physicians, a culture of dissent emerged against the generalization of quantification; resistance to quantitative evaluation was one of its keywords.
Spring 2009 was also a season for other demands of a completely different kind. The French government asked the eminent economists Amartya Sen, Joseph Stiglitz and Jean-Paul Fitoussi to propose revisions for the calculation of Gross Domestic Product (GDP). They suspected that GDP was a poor measure of the ‘wealth’ generated by a nation within a year. Activist researchers had already anticipated this request, which was given a great deal of media coverage.
Managing the Economy
From PART IV - SOCIAL SCIENCE AS DISCOURSE AND PRACTICE IN PUBLIC AND PRIVATE LIFE
Edited by Theodore M. Porter (University of California, Los Angeles) and Dorothy Ross (The Johns Hopkins University)
Book: The Cambridge History of Science
Published online: 28 March 2008
Print publication: 04 August 2003, pp. 553-564
- Chapter
Summary
Since the eighteenth century, economic science has been punctuated by debates on the relation between state and market. Its history has been marked by a succession of doctrines and political constellations, more or less interrelated. They have usually been understood historically in relation to dominant ideas and institutional practices: mercantilism, planism, liberalism, the welfare state, Keynesianism, and neoliberalism. Whatever their dominant orientations, the various states gradually constructed systems of statistical observation. Yet the development of these statistical systems has generally been presented as a sort of inevitable and univocal progress, having little relation to the evolution of the variegated doctrines and practices of state direction and guidance of the economy. The historiography of economic thought, or more precisely, historical works dealing with the reciprocal interactions between the state and economic knowledge, has placed little emphasis upon the modes of statistical description specific to various historical configurations of state and market. In a word, these two histories, that of political economy and that of statistics, are rarely presented, much less problematized, together.
The reason for this gap in economic historiography is simple. Statistics has historically been perceived as an instrument, a subordinate methodology, a technical tool providing empirical validation for economic research and its political extensions. According to this “Whig” conception of the progress of science and its applications, statistics (understood as the production both of information and of the mathematical tools used to analyze that information) progresses autonomously relative to economic doctrine and practice. It is for this reason that the historical specificity of statistics is neglected in the historiography of economic science, and left unproblematized.
Three Studies on the History of Sampling Surveys: Norway, Russia-USSR, United States
By Alain Desrosières
Journal: Science in Context, Volume 15, Issue 3, September 2002
Published online by Cambridge University Press: 14 January 2003, pp. 377-383
Print publication: September 2002
- Article
The object of sampling surveys is to evaluate variables characterizing an aggregate (the “whole”) through the observation of only a fraction of that whole, the “sample.” Though these surveys do focus on sociological or economic variables, it is because of opinion polls – often referred to as Gallup polls, from the name of the American businessman who first applied them to election forecasts and market surveys – that the investigative techniques involved in sampling surveys gained their claim to fame. The identification with this particular type of survey has become so strong that the French word for sampling survey, sondage, which, for statisticians, designates the survey method that substitutes the “part” for the “whole” (sampling), has become, for the general public, synonymous with the term enquête d’opinion (poll), to such an extent that any controversy on sondages now focuses on the scientific validity of the concept of “opinion” rather than on the legitimacy of extrapolating “from the part to the whole.” This technique, however, now perfectly well codified thanks to probabilistic techniques and the computation of “confidence intervals,” has a complex history, which predates the Gallup method of the 1930s. The probabilistic justification of the method’s legitimacy stems from a number of developments in different survey techniques, focusing on “typical cases,” “examples,” and, subsequently, on “purposive” sampling techniques, as opposed to “random” sampling. This legitimization, relying on probabilities, has not, however, gained general acceptance, since even nowadays the so-called “quota” method does not follow the canons of random selection and of confidence intervals.
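The “part for the whole” extrapolation and the confidence intervals the abstract refers to can be sketched in a few lines of Python. This is a minimal illustration only: the toy population, the sample size, and the 95% normal-approximation z-value are assumptions for the example, not taken from the article.

```python
import math
import random

def sample_proportion_ci(population, sample_size, z=1.96, seed=0):
    """Estimate the population proportion of a 0/1 attribute from a
    simple random sample, with a normal-approximation confidence interval."""
    rng = random.Random(seed)  # fixed seed so the sketch is reproducible
    sample = rng.sample(population, sample_size)
    p_hat = sum(sample) / sample_size
    # Standard error of the sample proportion under simple random sampling
    se = math.sqrt(p_hat * (1 - p_hat) / sample_size)
    return p_hat, (p_hat - z * se, p_hat + z * se)

# A toy "whole": 10,000 units, 30% of which carry the attribute of interest
population = [1] * 3000 + [0] * 7000
p_hat, (lo, hi) = sample_proportion_ci(population, sample_size=400)
print(f"estimate {p_hat:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```

The point of the sketch is the one the abstract makes: observing only 400 of 10,000 units yields an estimate whose uncertainty can itself be quantified, which is precisely the probabilistic legitimacy that quota sampling, by contrast, does not claim.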